382 research outputs found

    Emotional Attachment, Performance, and Viability in Teams Collaborating with Embodied Physical Action (EPA) Robots

    Although different types of teams increasingly employ embodied physical action (EPA) robots as a collaborative technology to accomplish their work, we know very little about what makes such teams successful. This paper has two objectives: the first is to examine whether a team’s emotional attachment to its robots can lead to better team performance and viability; the second is to determine whether robot and team identification can promote a team’s emotional attachment to its robots. To achieve these objectives, we conducted a between-subjects experiment with 57 teams working with robots. Teams performed better and were more viable when they were emotionally attached to their robots. Both robot and team identification increased a team’s emotional attachment to its robots. Results of this study have implications for collaboration using EPA robots specifically and for collaboration technology in general.

    Technology Affordances and IT Identity

    This study examines the impact of technology affordances on identifying the self with technology (IT identity), as well as the role of experiences in mediating the relationship between technology affordances and IT identity. To answer our research questions, we will conduct a cross-sectional survey.

    Facilitating Employee Intention to Work with Robots

    Organizations are adopting and integrating robots to work with and alongside their human employees. However, those employees are not necessarily happy about this new work arrangement, in part because of increasing fears that robots will eventually take their jobs. Organizations now face the challenge of integrating robots into their workforce by encouraging humans to work with their robotic teammates. To address this issue, this study employs similarity-attraction theory to encourage humans to work with and alongside their robotic co-workers. Our research model asserts that surface- and deep-level similarity with the robot will affect a human’s willingness to work with it. We also examine whether risk moderates the importance of both surface- and deep-level similarity. To empirically examine this model, this proposal presents an experimental design. Results of the study should provide new insights into the benefits and limitations of using similarity to encourage humans to work with and alongside their robot co-workers.

    Shocking the Crowd: The Effect of Censorship Shocks on Chinese Wikipedia

    Collaborative crowdsourcing has become a popular approach to organizing work across the globe. Being global also means being vulnerable to shocks -- unforeseen events that disrupt crowds -- that originate from any country. In this study, we examine changes in the collaborative behavior of editors of Chinese Wikipedia that arose from the 2005 government censorship in mainland China. Using the exogenous variation in the fraction of editors blocked across different articles due to the censorship, we examine the impact of the reduction in group size, which we denote as the shock level, on three collaborative behavior measures: volume of activity, centralization, and conflict. We find that activity and conflict drop on articles that face a shock, whereas centralization increases. The impact of a shock on activity increases with shock level, whereas the impact on centralization and conflict is higher for moderate shock levels than for very small or very high shock levels. These findings provide support for threat rigidity theory -- originally introduced in the organizational theory literature -- in the context of large-scale collaborative crowds.
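    To make the measures concrete, here is a minimal sketch of how the shock level and one centralization proxy could be computed, assuming a per-article edit log with hypothetical columns article, editor, and blocked; it is an illustration, not the authors' actual pipeline.

    import numpy as np
    import pandas as pd

    def shock_level(edits: pd.DataFrame) -> pd.Series:
        # Fraction of each article's editors who were blocked by the censorship.
        per_editor = edits.groupby(["article", "editor"])["blocked"].max()
        return per_editor.groupby("article").mean()

    def gini(counts) -> float:
        # Gini coefficient of per-editor edit counts: one simple proxy for how
        # centralized activity on an article is (0 = equal, 1 = one editor does all).
        x = np.sort(np.asarray(counts, dtype=float))
        n = x.size
        if n == 0 or x.sum() == 0.0:
            return 0.0
        cum = np.cumsum(x)
        return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

    # Tiny synthetic example: article A loses half its editors to the block.
    edits = pd.DataFrame({
        "article": ["A", "A", "A", "B", "B"],
        "editor": ["u1", "u2", "u1", "u3", "u4"],
        "blocked": [1, 0, 1, 0, 0],
    })
    print(shock_level(edits))
    print(edits.groupby(["article", "editor"]).size().groupby("article").apply(gini))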

    ICIS 2019 SIGHCI Workshop Panel Report: Human–Computer Interaction Challenges and Opportunities for Fair, Trustworthy and Ethical Artificial Intelligence

    Artificial Intelligence (AI) is rapidly changing every aspect of our society—including amplifying our biases. Fairness, trust and ethics are at the core of many of the issues underlying the implications of AI. Despite this, research on AI with relation to fairness, trust and ethics in the information systems (IS) field is still scarce. This panel brought together academia, business and government perspectives to discuss the challenges and identify potential solutions to address such challenges. This panel report presents eight themes based around the discussion of two questions: (1) What are the biggest challenges to designing, implementing and deploying fair, ethical and trustworthy AI?; and (2) What are the biggest challenges to policy and governance for fair, ethical and trustworthy AI? The eight themes are: (1) identifying AI biases; (2) drawing attention to AI biases; (3) addressing AI biases; (4) designing transparent and explainable AI; (5) AI fairness, trust, ethics: old wine in a new bottle?; (6) AI accountability; (7) AI laws, policies, regulations and standards; and (8) frameworks for fair, ethical and trustworthy AI. Based on the results of the panel discussion, we present research questions for each theme to guide future research in the area of human–computer interaction.

    Examining the effects of emotional valence and arousal on takeover performance in conditionally automated driving

    In conditionally automated driving, drivers have difficulty in takeover transitions as they become increasingly decoupled from the operational level of driving. Factors influencing takeover performance, such as takeover lead time and the engagement of non-driving-related tasks, have been studied in the past. However, despite the important role emotions play in human-machine interaction and in manual driving, little is known about how emotions influence drivers’ takeover performance. This study, therefore, examined the effects of emotional valence and arousal on drivers’ takeover timeliness and quality in conditionally automated driving. We conducted a driving simulation experiment with 32 participants. Movie clips were played for emotion induction. Participants with different levels of emotional valence and arousal were required to take over control from automated driving, and their takeover time and quality were analyzed. Results indicate that positive valence led to better takeover quality in the form of a smaller maximum resulting acceleration and a smaller maximum resulting jerk. However, high arousal did not yield an advantage in takeover time. This study contributes to the literature by demonstrating how emotional valence and arousal affect takeover performance. The benefits of positive emotions carry over from manual driving to conditionally automated driving, while the benefits of arousal do not.
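    As a rough illustration of the two takeover-quality measures named above, the sketch below computes a maximum resulting acceleration and maximum resulting jerk from longitudinal and lateral acceleration traces; the array names, the 50 Hz sampling rate, and the finite-difference jerk approximation are assumptions, not details taken from the paper.

    import numpy as np

    def takeover_quality(ax, ay, hz=50.0):
        # Resulting acceleration: magnitude of the combined longitudinal and
        # lateral components at each sample.
        a = np.hypot(np.asarray(ax, dtype=float), np.asarray(ay, dtype=float))
        # Jerk approximated as the finite difference of the magnitude trace,
        # scaled by the sampling rate to get units of m/s^3.
        jerk = np.abs(np.diff(a)) * hz
        return a.max(), jerk.max()

    # Synthetic 3-second traces sampled at 50 Hz.
    t = np.linspace(0.0, 3.0, 151)
    max_a, max_j = takeover_quality(np.sin(t), 0.3 * np.cos(t))
    print(f"max accel: {max_a:.2f} m/s^2, max jerk: {max_j:.2f} m/s^3")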

    Considerations for Task Allocation in Human-Robot Teams

    In human-robot teams where agents collaborate, there needs to be a clear allocation of tasks to agents. Task allocation can aid in achieving the presumed benefits of human-robot teams, such as improved team performance. Many task allocation methods have been proposed that include factors such as agent capability, availability, workload, fatigue, and task- and domain-specific parameters. In this paper, selected work on task allocation is reviewed. In addition, some areas for continued and further consideration in task allocation are discussed, including level of collaboration, novel tasks, unknown and dynamic agent capabilities, negotiation and fairness, and ethics. Where applicable, we also mention some of our own work on task allocation. Through continued efforts and considerations in task allocation, human-robot teaming can be improved. (Presented at the AI-HRI symposium as part of AAAI-FSS 2022; arXiv:2209.14292.)
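    As one concrete, if simplified, formulation of the allocation problem discussed above, the sketch below assigns tasks to agents by minimizing a cost that folds together capability and workload; the factors, weights, and linear-assignment framing are illustrative assumptions, not the specific methods the paper surveys.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    n_agents, n_tasks = 3, 3

    # Hypothetical inputs: how poorly suited each agent is to each task, and
    # each agent's current workload (both on arbitrary 0-1 scales).
    capability_cost = rng.random((n_agents, n_tasks))
    workload = rng.random(n_agents)

    # Fold workload into the cost matrix, then solve the assignment problem so
    # that the total cost across the team is minimized.
    cost = capability_cost + 0.5 * workload[:, None]
    for agent, task in zip(*linear_sum_assignment(cost)):
        print(f"agent {agent} -> task {task} (cost {cost[agent, task]:.2f})")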

    Mechanisms Underlying Social Loafing in Technology Teams: An Empirical Analysis

    Prior research has identified team size and dispersion as important antecedents of social loafing in technology-enabled teams. However, the underlying mechanisms through which team size and team dispersion cause individuals to engage in social loafing are significantly understudied. To address this gap, we use Bandura’s Theory of Moral Disengagement to explain why individuals engage in social loafing behavior under conditions of increasing team size and dispersion. We identify three mechanisms—advantageous comparison, displacement of responsibility, and moral justification—that mediate the relationship between team size, dispersion, and social loafing. Herein, we present the theory development and arguments for our hypotheses, along with initial findings from this study. Implications of the expected research findings are also discussed.

    Look Who's Talking Now: Implications of AV's Explanations on Driver's Trust, AV Preference, Anxiety and Mental Workload

    Explanations given by automation are often used to promote automation adoption. However, it remains unclear whether explanations promote acceptance of automated vehicles (AVs). In this study, we conducted a within-subject experiment in a driving simulator with 32 participants under four conditions: (1) no explanation, (2) an explanation given before the AV acted, (3) an explanation given after the AV acted, and (4) the option for the driver to approve or disapprove the AV's action after hearing the explanation. We examined four outcomes: trust, preference for the AV, anxiety, and mental workload. Results suggest that explanations provided before the AV acted were associated with higher trust in and preference for the AV, but there was no difference in anxiety or workload. These results have important implications for the adoption of AVs.

    Introduction to the Special Issue on AI Fairness, Trust, and Ethics

    It is our pleasure to welcome you to this AIS Transactions on Human Computer Interaction special issue on artificial intelligence (AI) fairness, trust, and ethics. This special issue received research papers that unpacked the potential, challenges, impacts, and theoretical implications of AI. It contains four papers that integrate research across diverse fields of study, such as social science, computer science, engineering, and design, on topics related to AI fairness, trust, and ethics broadly conceptualized. This issue contains three of the four papers (along with a regular paper of the journal); the fourth and final paper of this special issue is forthcoming in March 2021. We hope that you enjoy these papers and, like us, look forward to similar research published in AIS Transactions on Human Computer Interaction.